feat(sdk): AI SDK custom useChat transport & chat.task harness #3173
🦋 Changeset detected — latest commit: 6ecf1d7. The changes in this PR will be included in the next version bump. This PR includes changesets to release 30 packages.
Fixes the last set of issues that were blocking TriggerChatTransport
from running end-to-end against the ai-chat reference. Smoke now
passes: new chat → send → streamed assistant reply in ~4s → second
turn reuses the same session + run, lastEventId advances 10 → 21.
SDK (@trigger.dev/sdk)
- RenewRunAccessTokenParams carries the durable sessionId alongside
chatId + runId. Server-side renew handlers MUST mint the renewed
PAT with read:sessions:{sessionId} + write:sessions:{sessionId}
scopes (in addition to the existing run scopes) — without them,
the first append after expiry 401s on session.in/append and sends
the transport into a renew loop. transport.renewRunPatForSession
looks up the cached sessionId off `this.sessions` so existing
renew callers just need to spread the new field through.
- transport.preload(chatId) on the triggerTask callback path no
longer calls apiClient.createSession from the browser. Matches
sendMessages: when triggerTaskFn is configured the server action
(chat.createTriggerAction) creates the Session with its secret
key and returns sessionId alongside the run PAT. Browser
deployments using the callback flow therefore never need
write:sessions on any browser-facing token.
- chat.test.ts renew-spy assertions updated to match the new
{chatId, runId, sessionId} shape — 86/86 tests still green.
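As a minimal sketch of what a server-side renew handler now has to mint, assuming the `read:sessions:{sessionId}` / `write:sessions:{sessionId}` scope strings from above (the run-scope string format and the function name here are illustrative, not the actual SDK surface):

```typescript
// Hypothetical sketch: the scope list a renew handler should mint for the
// renewed run PAT. Only the two session-scope formats are taken from the
// change above; the run-scope format is an assumption.
type RenewRunAccessTokenParams = {
  chatId: string;
  runId: string;
  sessionId: string; // newly carried alongside chatId + runId
};

// Without the two session scopes, the first append after expiry 401s on
// session.in/append and sends the transport into a renew loop.
function scopesForRenewedPat(params: RenewRunAccessTokenParams): string[] {
  return [
    // existing run scope (illustrative format)
    `read:runs:${params.runId}`,
    // session scopes required for session.in / session.out after renewal
    `read:sessions:${params.sessionId}`,
    `write:sessions:${params.sessionId}`,
  ];
}
```

Existing renew callers only need to spread the new `sessionId` field through from the callback params.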
Webapp
- POST /api/v1/sessions gets allowJWT: true + corsStrategy: "all".
Pre-fix, the route rejected any CORS-preflighted browser call,
which broke the transport's direct accessToken fallback path
(sessions.create from the browser).
- POST /realtime/v1/sessions/:session/:io/append now exports both
{ action, loader }. The route builder installs the OPTIONS
preflight handler on the loader; without a loader export, the
preflight returned 400 ("No loader for route") and Chrome
surfaced the follow-up POST as net::ERR_FAILED. Same pattern
already in use on /api/v1/tasks/:id/trigger.
references/ai-chat
- Switch both chat-app.tsx and chat-view.tsx from
accessToken: getChatToken to triggerTask: triggerChat. This path
has the server action create the Session server-side with the
secret key, so the browser never hits POST /api/v1/sessions and
the returned PAT already carries the session scopes needed for
session.in/out.
- renewRunAccessTokenForChat(chatId, runId, sessionId?) now mints
tokens that include read:sessions:{sessionId} +
write:sessions:{sessionId} alongside the run scopes. Both call
sites thread the sessionId from the SDK's renew callback params.
- Drop executeJs / runInSecureSandbox / runInPRReviewSandbox to
decouple ai-chat trigger dev from the isolated-vm native binary
(its darwin-arm64 prebuild is broken against node 20.20.0 on
the current toolchain). Deletes src/lib/secure-sandbox.ts and
src/lib/pr-review-sandbox.ts, removes the executeJs tool from
chatTools, the secure-exec-bridge esbuild plugin from
trigger.config.ts (and its companion node-stdlib-browser-stub),
and the `secure-exec` dependency from package.json. E2B-backed
executeCode stays. If a future session needs the in-process V8
sandbox back, reintroduce through a different module (or pin a
prebuilt binary) to avoid this failure mode.
Smoke drove via the window.__chat bridge from Chrome DevTools MCP —
no click-based interaction needed.
Final Phase F cleanup — `CHAT_STREAM_KEY`, `CHAT_MESSAGES_STREAM_ID`, and `CHAT_STOP_STREAM_ID` were meaningful only when chat.agent I/O lived on run-scoped Redis streams. The Session migration moved all chat I/O onto the backing Session's `.in` / `.out` channels, so these constants stopped describing how anything is addressed months ago and have been dead-weight re-exports since.

Dropped from the public surface:
- `@trigger.dev/core/v3/chat-client` no longer exports the three constants. The file keeps `ChatStoreChunk` + `applyChatStorePatch` (the chat.store primitive's shared types).
- `@trigger.dev/sdk/ai` no longer re-exports them via the `CHAT_STREAM_KEY` / `CHAT_MESSAGES_STREAM_ID` / `CHAT_STOP_STREAM_ID` aliases introduced by the migration commit.
- Deletes `packages/trigger-sdk/src/v3/chat-constants.ts` (the shim that bridged core's definitions to the SDK's public surface).

What stayed the same:
- `chat.stream.id` / `chat.messages.id` / `chat.stopSignal.id` still contain the literal strings `"chat"` / `"chat-messages"` / `"chat-stop"` — inlined as opaque breadcrumbs rather than user-consumable constants. Telemetry attrs keep the same values, so dashboards/spans don't shift.
- All runtime behavior is untouched. The `chatStream` / `messagesInput` / `stopInput` facades still delegate through the Session handle exactly as before; only the constant symbols are gone.

Migration note for external callers: anyone still importing the old constants should migrate to the session primitives:
- `streams.writer(CHAT_STREAM_KEY, …)` → `sessions.open(sessionId).out.writer(…)`
- `streams.input(CHAT_MESSAGES_STREAM_ID)` → `sessions.open(sessionId).in.on(…)` (filtered by `chunk.kind === "message"`)
- `streams.input(CHAT_STOP_STREAM_ID)` → `sessions.open(sessionId).in.on(…)` (filtered by `chunk.kind === "stop"`)

Validated:
- 86/86 SDK tests green.
- Webapp typecheck clean (core types used in SpanPresenter + AgentView are untouched).
- ai-chat UI smoke passes end-to-end: new chat → send "Say hi in three words." → first assistant text in 4.9s → sessionId + runId + lastEventId all set.
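The migration note's `chunk.kind` filtering can be sketched as a small router; the chunk shape here is an assumption, only the `kind` discriminant values come from the note:

```typescript
// Illustrative stand-in for what session.in.on(...) handlers receive; the
// real chunk type carries more fields.
type SessionInChunk =
  | { kind: "message"; payload: unknown }
  | { kind: "stop" };

// Replaces the two dedicated streams.input(...) subscriptions: one
// session.in handler, branched on chunk.kind.
function routeInChunk(
  chunk: SessionInChunk,
  onMessage: (payload: unknown) => void,
  onStop: () => void
): void {
  if (chunk.kind === "message") onMessage(chunk.payload);
  else if (chunk.kind === "stop") onStop();
}
```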
…ircuit

TriggerChatTransport.reconnectToStream previously returned null any time state.isStreaming was falsy, which included undefined. That meant a caller who dropped isStreaming from their ChatSession persistence (a reasonable simplification now that the server can tell the client when a session is settled via X-Session-Settled on the session.out SSE) would get null on every reconnect, and the UI would never resume streaming.

Tighten the check to state.isStreaming === false so only an explicit false triggers the fast-path skip. Undefined now falls through to open the SSE and let the server decide — on a settled session the server already closes the connection in ~1s via wait=0, so there is no 60s hang to worry about.

Backward compatible: callers who still persist and hydrate isStreaming (true/false) keep today's behavior exactly; callers who drop the flag now get the server-authoritative path.
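The tightened check reduces to a one-liner; `ChatSessionState` here is a stand-in name, only the `isStreaming` semantics come from the change above:

```typescript
// Persisted chat-session shape, as far as this check cares.
type ChatSessionState = { isStreaming?: boolean };

// Only an explicit false skips the reconnect. undefined (flag dropped from
// persistence) falls through to open the SSE and let the server decide.
function shouldSkipReconnect(state: ChatSessionState): boolean {
  return state.isStreaming === false;
}
```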
Three dashboard-scoped stream routes were passing request.signal into realtimeStream.streamResponse. That signal is broken under Remix+Express (see apps/webapp/CLAUDE.md, nodejs/node#55428 — the chain is severed when Remix internally clones the Request), so when a user closes their dashboard tab the signal never fires. The underlying RedisRealtimeStreams.streamResponse loops while(!signal.aborted) over XREAD BLOCK and only exits on its 15s inactivity timeout; the S2 path keeps the upstream fetch open for up to its 60s wait window.

Thread getRequestAbortSignal() through:

- resources/orgs/.../runs/$runParam/realtime/v1/streams/$runId/$streamId
- resources/orgs/.../runs/$runParam/realtime/v1/streams/$runId/input/$streamId
- resources/orgs/.../playground/realtime/v1/streams/$runId/$streamId

Each picks up the Express res.on('close')-backed signal that fires reliably when the downstream client disconnects.
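A sketch of the res.on('close')-backed signal, assuming an Express-like response emitter — this is an illustrative reimplementation of the helper named above, not the webapp's actual source:

```typescript
import { EventEmitter } from "node:events";

// Bridges Express's 'close' event (which fires reliably on client
// disconnect) to an AbortSignal, replacing the severed request.signal.
function getRequestAbortSignal(res: EventEmitter): AbortSignal {
  const controller = new AbortController();
  res.once("close", () => controller.abort());
  return controller.signal;
}
```

Handing this signal to the stream loop lets `while (!signal.aborted)` exit as soon as the tab closes instead of waiting out the inactivity timeout.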
Pulls PENDING_MESSAGE_INJECTED_TYPE, ChatTaskWirePayload, and the client-data inference helpers out of ai.ts (~7000 lines, statically imports node:* via the skills runtime) into a new ai-shared.ts that stays free of node-only imports. chat.ts and chat-react.ts now reach for these via ai-shared so browser bundlers don't trace ai.ts's entire module graph (Turbopack rejected the node: builtins outright).
The webapp's peek-tail-settled shortcut on /realtime/v1/sessions/:id/out previously fired on every io=out subscription. That race-tripped active send-a-message paths: the SSE peek would see the prior turn's trigger:turn-complete record before the newly-triggered run wrote its first chunk, return wait=0 + X-Session-Settled:true, and close the stream before any of the new turn's records landed.

Make the peek opt-in via an X-Peek-Settled: 1 request header. Only TriggerChatTransport.reconnectToStream sets it (the true reload-resume case where settling early is fine); sendMessages and the rest leave it off and stay on the normal long-poll. On the server side, streamResponseFromSessionStream gates the peek on options.peekSettled and skips it otherwise.

- apps/webapp: read X-Peek-Settled from the request, thread to streamResponseFromSessionStream
- packages/trigger-sdk/chat.ts: peekSettled option on subscribeToSessionStream; reconnectToStream sets it, sendMessages does not
- docs/ai-chat/client-protocol.mdx + docs/sessions/reference.mdx: document the opt-in semantics
- .server-changes/session-out-settled-signal.md: record the change
Companion to the SDK opt-in. Webapp routes read X-Peek-Settled from the request and skip the tail peek when it isn't set, so active send-a-message paths can't race a stale trigger:turn-complete. Docs note the opt-in semantics; .server-changes records the change for the deploy log.
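The client-side half of the opt-in reduces to header construction; the header name is from the change, the function shape is hypothetical:

```typescript
// Builds the headers for a session.out SSE subscription. Only
// reconnectToStream passes peekSettled: true; sendMessages leaves it off so
// the server stays on the normal long-poll and can't race a stale
// trigger:turn-complete record.
function buildSessionOutHeaders(opts: {
  lastEventId?: string;
  peekSettled?: boolean;
}): Record<string, string> {
  const headers: Record<string, string> = {};
  if (opts.lastEventId) headers["Last-Event-ID"] = opts.lastEventId;
  if (opts.peekSettled) headers["X-Peek-Settled"] = "1";
  return headers;
}
```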
chat.agent now runs on top of the Session-as-run-manager primitive.
Public surface (`chat.agent({...})`, `useTriggerChatTransport`,
`chat.store` / `chat.defer` / `chat.history`, `AgentChat`) is unchanged;
the wiring underneath moves from per-run streams to the durable Session
row that owns its own runs.
Transport (TriggerChatTransport):
- Drop `getStartToken`. Replace with
`startSession({chatId, taskId, clientData}) => {publicAccessToken}` —
wraps a server action that calls `chat.createStartSessionAction`.
Idempotent on `(env, externalId)`.
- `clientData` (typed via `withClientData`) is threaded through
`startSession`'s params, so the first run's `basePayload.metadata`
matches per-turn `metadata`. Live-updated via `setClientData` when
the hook's `clientData` option changes.
- Drop transport-level `triggerConfig` / `triggerOptions` /
`idleTimeoutInSeconds`. All trigger config lives server-side in the
customer's `chat.createStartSessionAction(taskId, options)`.
- `transport.preload(chatId)` and lazy first `sendMessage` both route
through `startSession`, deduped via the in-flight pendingStarts map.
- `ChatSession` persistable shape drops `runId`; just `{lastEventId}`.
chat.agent runtime:
- New `chat.createStartSessionAction(taskId, options?)` — server-side
wrapper that calls `sessions.start` with `basePayload.{messages:[],
trigger: "preload"}` defaults plus the customer's overrides. Returns
`{sessionId, runId, publicAccessToken}`.
- `chat.requestUpgrade` calls `apiClient.endAndContinueSession` before
emitting the `trigger:upgrade-required` chunk. Server orchestrates
the swap; browser keeps streaming across the run handoff.
Webapp dashboard:
- Playground: `startSession` + `accessToken` both wired through the
Remix action (idempotent server-side start path). Preload button
now works. New session proxy routes for HEAD/GET on `/out` and POST
on `/in/append`; old run-stream proxies deleted.
- Run inspector Agent tab: SSE proxy now uses the canonical addressing
key (externalId if set, else friendlyId), matching what the agent
writes via `session.out`. Fixes the case where the Agent tab read
from a different S2 stream than the agent wrote to.
References (ai-chat):
- `chat-view` useEffect dance gone (just hydrates `initialSession`).
- `chat-app` `transport.preload(id)` routes through `startSession`.
- New `upgrade-test` agent + sidebar option for exercising
`chat.requestUpgrade` end-to-end.
- `ChatSession` schema simplified: drop `runId` / `sessionId`, keep
`publicAccessToken` + `lastEventId`.
- `chat-client-test` fixed for the new transport shape.
- Hello-world smoke stubs gutted to TODO placeholders — sessions
are now task-bound, so standalone-session smokes need rewriting.
Persistent listeners registered via `session.in.on(...)` (e.g. chat.agent's `stopInput.on` for the stop signal) must not 'consume' chunks. They filter by `kind` and ignore non-matching chunks, so previously `#dispatch` was silently dropping any chunk that arrived before a once-waiter had registered. This race surfaced on test cloud (network round-trip > sync subscribe-time) but not locally (zero-latency). Symptom: chat.agent's first user message landed in S2 before `messagesInput.waitWithIdleTimeout` registered its waiter, the tail received it, `#dispatch` saw the `stopInput` handler and returned without buffering, the message was gone, the waitWithIdleTimeout fell through to a durable waitpoint, and the race-check skipped seq 0 (since the tail's onPart had advanced `lastSeqNum` to 0). Fix: when no once-waiter exists, invoke handlers AND buffer the chunk. Handlers observe; they don't consume.
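The fixed dispatch semantics can be sketched with a minimal channel; the class and member names are stand-ins for the SDK's internals:

```typescript
type Chunk = { kind: string };

// Minimal model of session.in dispatch: persistent handlers observe every
// chunk; only a once-waiter consumes.
class InChannel {
  private handlers: Array<(c: Chunk) => void> = [];
  private waiter: ((c: Chunk) => void) | undefined;
  private buffer: Chunk[] = [];

  // Persistent listener, e.g. chat.agent's stopInput.on for the stop signal.
  on(handler: (c: Chunk) => void): void {
    this.handlers.push(handler);
  }

  // Once-waiter registration; drains the buffer first so a chunk that
  // arrived before registration (the test-cloud race) is not lost.
  waitOnce(resolve: (c: Chunk) => void): void {
    const buffered = this.buffer.shift();
    if (buffered) return resolve(buffered);
    this.waiter = resolve;
  }

  dispatch(chunk: Chunk): void {
    // Handlers always observe; they never consume.
    for (const handler of this.handlers) handler(chunk);
    if (this.waiter) {
      const waiter = this.waiter;
      this.waiter = undefined;
      waiter(chunk);
    } else {
      // Pre-fix, the mere presence of a handler skipped this buffering and
      // the chunk was silently dropped.
      this.buffer.push(chunk);
    }
  }
}
```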
…omic persist in reference onTurnComplete
- chat.createStartSessionAction now adds 'chat:{chatId}' as the first tag on the triggered run, matching the browser-mediated transport.doStart path. Customer-provided tags merge after, capped at 5. Without this, runs created via server actions were untagged, breaking the dashboard chat-id filter.
- references/ai-chat onTurnComplete persists Chat.messages and ChatSession.lastEventId in a single prisma.$transaction. Two parallel reads on the next page load (Promise.all([getChatMessages, getSessionForChat])) can otherwise observe messages post-write but lastEventId pre-write. The transport then resumes from the stale cursor and replays this turn's chunks on top of the already-persisted assistant message, duplicating the render. Applies to both the main chat.agent and the hydrated variant.
The reference's onTurnStart was using chat.defer for the messages write, which is fire-and-forget. If a user refreshed the page mid-stream, getChatMessages returned [] (the deferred write hadn't landed yet), useChat hydrated with empty initialMessages, and the resumed SSE stream pushed the assistant into an empty array — the user's message vanished from the rendered conversation forever. Switch to await prisma.chat.update(...) so the write is durable before chat.agent begins streaming. Verified end-to-end against test cloud: mid-stream refresh now yields [user, assistant] with no duplication. Aligns with the Warning added to docs/ai-chat/patterns/database-persistence.mdx in the docs branch.
…lates

The reference's Chat / ChatSession Postgres tables are shared between local and test cloud targets. A row created with one webapp's PAT and lastEventId is poison if you switch the .env to the other target and reuse the same chatId — the transport gets a 401 or resumes from a sequence that doesn't exist on the other backend.

Adds:

- prisma/reset-chats.sql: TRUNCATE Chat, ChatSession (User survives — it's upserted by onPreload/onChatStart anyway).
- package.json db:reset:chats script wrapping prisma db execute --file.

Run `pnpm run db:reset:chats` between target switches and at the top of every smoke test. Codified in the ai-chat-e2e skill as a required prereq.
… panel + sendAction bridge

UX cleanup discovered during the Sessions e2e sweep. Three changes, one commit because they all live in the chat input row / debug panel area:

- Explicit "Preload" button next to "Send" that only renders when the chat has no messages and no session yet. Clicking calls transport.preload(chatId), which mints the session and triggers the first run with trigger:"preload". Self-hides once session is truthy. Replaces the inert "Preload new chats" sidebar checkbox (the visible `+ New Chat` button only navigated and never called transport.preload — preloadEnabled was wired through the context but read by nobody, since ChatApp.tsx is no longer the mounted chat sidebar). Drops the dead preloadEnabled state + checkbox from chat-settings-context, chat-sidebar, chat-sidebar-wrapper, and the chat-app.tsx legacy code path.
- Debug panel "Runs → View in dashboard" row, gated on dashboardUrl + a new NEXT_PUBLIC_TRIGGER_PROJECT_DASHBOARD_PATH env var. Resolves to the runs-list page filtered by the chat:<chatId> tag — so opening the link drops you straight into the run list for the active chat. Threads the new prop through chat-view → chat → DebugPanel.
- window.__chat.sendAction(action) bridge wrapper that delegates to transport.sendAction(chatId, action). Lets smoke tests drive aiChatHydrated's actionSchema (undo/rollback/remove/replace) without reaching into React internals.
CreateSessionRequestBody now requires `taskIdentifier` and `triggerConfig` because Sessions are task-bound (the server reuses the config for every run scheduled by the session — initial + continuations). The MCP `agentChat` tool was still passing only `{ type, externalId }` from the pre-Sessions-as-run-manager API. Add `taskIdentifier: input.agentId` and a minimal `triggerConfig` with `basePayload: { chatId, ...clientData }` and the `chat:{chatId}` auto-tag.
Unblocks typecheck on PR #3173 (and Windows CLI v3 e2e, which builds cli-v3 in pre-test).
Migration 029 added `task_kind` to `task_runs_v2`, and TASK_RUN_COLUMNS was updated, but the four test-data arrays in src/taskRuns.test.ts were not. ClickHouse rejects the inserts with "Cannot parse input: expected ',' before: ']'" because the array length is one short of the column count. All 7 internal/clickhouse unit-test shards on PR #3173 fail on this. Pre-existing bug (predates my Sessions work) but blocking CI; verified the fix locally — `vitest run src/taskRuns.test.ts` now passes 4/4.
…messages: []` in basePayload
Server-to-agent flows (`AgentChat` SDK class + cli-v3 MCP `start_agent_chat`) were building `triggerConfig.basePayload` without the `trigger: "preload"` and `messages: []` fields the agent runtime branches on. Result: the auto-triggered first run had `payload.trigger === undefined`, neither `onPreload` nor `onChatStart` fired, and `onTurnStart`'s DB-write blew up with PrismaClient "No record found" because no Chat row had been created.
Browser-mediated flows already had this right (`chat.createStartSessionAction` in `ai.ts:6951`); the server-side path now mirrors that shape.
- packages/trigger-sdk/src/v3/chat-client.ts — `AgentChat.ensureStarted` adds the two fields to `basePayload`. `chat-client-test`'s `pong` orchestrator now returns the assistant text instead of an empty string.
- packages/cli-v3/src/mcp/tools/agentChat.ts — same fix on `start_agent_chat`'s `createSession` call. Also drops the redundant separate `apiClient.triggerTask(...)` call: `POST /api/v1/sessions` now auto-triggers the first run and returns its runId, so a second trigger from the MCP would have produced a competing run on the same session. Use `session.runId` from the create response. The `preload` input flag becomes a no-op signal (response message wording only) since session-create always triggers a run now.
Verified end-to-end against local:
- `chat-client-test` orchestrator returns `{ text: "pong" }`
- MCP `start_agent_chat` → `send_agent_message` x2 → `close_agent_chat` succeeds, both turns reuse the same runId
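The basePayload shape both server-to-agent paths now send can be sketched as a builder, mirroring the browser-mediated path's defaults — field names beyond `trigger` and `messages` are assumptions:

```typescript
type ChatBasePayload = {
  chatId: string;
  trigger: "preload";
  messages: unknown[];
};

// Builds the basePayload for a session-create triggerConfig. Without the
// trigger/messages fields the first run sees payload.trigger === undefined,
// so neither onPreload nor onChatStart fires and onTurnStart's DB write
// finds no Chat row.
function buildStartBasePayload(
  chatId: string,
  clientData?: Record<string, unknown>
): ChatBasePayload & Record<string, unknown> {
  return {
    chatId,
    trigger: "preload",
    messages: [],
    ...clientData,
  };
}
```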
The realtime stream caps each record at ~1 MiB. Today the chat.agent path through StreamsWriterV2 surfaces a generic S2Error from deep in the batching layer when a chunk exceeds the cap, with no chunk-type context and no guidance for callers. Add a pre-write byte check in StreamsWriterV2.initializeServerStream that fires before the chunk hits the underlying batcher, and a typed ChatChunkTooLargeError carrying the chunk's discriminant (type/kind), serialized size, and cap. Also exports an isChatChunkTooLargeError guard from the SDK so callers can branch cleanly. Threshold is 1 MiB minus 1 KiB to leave headroom for the JSON record envelope. The error message links to the new docs pattern (Pattern: ID-reference for large tool outputs / out-of-band streams.writer for run-scoped data).
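A sketch of the guard logic under the stated threshold (1 MiB minus 1 KiB); the real StreamsWriterV2 wiring is more involved, and the error-class fields here are illustrative:

```typescript
// 1 MiB record cap minus 1 KiB headroom for the JSON record envelope.
const MAX_RECORD_BYTES = 1024 * 1024 - 1024;

class ChatChunkTooLargeError extends Error {
  constructor(
    readonly chunkType: string,
    readonly serializedBytes: number,
    readonly maxBytes: number
  ) {
    super(
      `Chat chunk "${chunkType}" serializes to ${serializedBytes} bytes, over the ` +
        `${maxBytes}-byte stream record cap. Store large tool outputs out of band ` +
        `and stream an ID reference instead.`
    );
    this.name = "ChatChunkTooLargeError";
  }
}

// Guard exported so callers can branch cleanly on the typed error.
function isChatChunkTooLargeError(err: unknown): err is ChatChunkTooLargeError {
  return err instanceof ChatChunkTooLargeError;
}

// Pre-write byte check: fires before the chunk reaches the batcher, carrying
// the chunk's discriminant instead of a generic deep-layer S2Error.
function assertChunkFits(chunk: { type?: string; kind?: string }): void {
  const bytes = new TextEncoder().encode(JSON.stringify(chunk)).byteLength;
  if (bytes > MAX_RECORD_BYTES) {
    throw new ChatChunkTooLargeError(chunk.type ?? chunk.kind ?? "unknown", bytes, MAX_RECORD_BYTES);
  }
}
```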
- typesVersions: add `ai/skills-runtime` mapping (was missing → check-exports
failed with NoResolution on `@trigger.dev/sdk/ai/skills-runtime`).
- chat.store JSON Patch: reject `__proto__`, `constructor`, `prototype`
segments at parseJsonPointer. Closes the two CodeQL prototype-pollution
alerts on chat-client.ts:108 / :120 — a malicious patch like
`{ op: "replace", path: "/__proto__/x", value: 1 }` would otherwise
walk into Object.prototype via `parent[lastToken] = value`. Throws a
clear error on the whole patch instead.
- typesVersions: add `v3/chat-client` mapping. The export was declared in `tshy.exports` and the conditional export block but missing from `typesVersions` — `attw --pack` flagged "@trigger.dev/core/v3/chat-client" as `node10: 💀 Resolution failed`.
- chat.store JSON Patch: add an `assertSafeKey` guard at the assignment sites in `removeAt` / `insertAt`. parseJsonPointer already rejects `__proto__` / `constructor` / `prototype`, but CodeQL's prototype-pollution analysis doesn't trace through the parser boundary — the local check at the assignment keeps the static analysis happy and is also a real defense-in-depth backstop against any future caller that bypasses parseJsonPointer.
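The guard itself is small; this standalone version is illustrative of the `assertSafeKey` check named above, not the actual chat-client.ts source:

```typescript
const UNSAFE_KEYS = new Set(["__proto__", "constructor", "prototype"]);

// Rejects JSON Pointer segments that would walk into Object.prototype.
function assertSafeKey(key: string): void {
  if (UNSAFE_KEYS.has(key)) {
    throw new Error(`Unsafe JSON Pointer segment "${key}" rejected (prototype-pollution guard)`);
  }
}

// A patch like { op: "replace", path: "/__proto__/x", value: 1 } would
// otherwise reach Object.prototype via parent[lastToken] = value.
function setAt(target: Record<string, unknown>, key: string, value: unknown): void {
  assertSafeKey(key);
  target[key] = value;
}
```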
…SessionTriggerConfig + sync playground transport clientData

Two fixes from Devin's review on PR #3173.

## SessionTriggerConfig is missing 3 fields the playground UI shows

The playground sidebar (`PlaygroundSidebar`) renders working controls for `maxDuration`, `version`, and `region`. The action received the form fields, but `SessionTriggerConfig` didn't accept them, so they were `void`-suppressed and silently dropped. Runs ignored the user's max-duration cap, the version pin didn't apply, and region selection had no effect.

- `packages/core/src/v3/schemas/api.ts` — add three optional fields to `SessionTriggerConfig`: `maxDuration` (positive int, seconds), `lockToVersion` (string), `region` (string). All three forward to the matching field on `TaskRunOptions`.
- `apps/webapp/app/services/realtime/sessionRunManager.server.ts` — extend `triggerSessionRun`'s `body.options` to thread the three fields through to `TriggerTaskService` when present.
- `apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.playground.action.tsx` — fold the three form fields into `triggerConfig`; remove the `void` suppressions.

## Playground transport's clientData becomes stale after edits

The route constructs `TriggerChatTransport` directly via `useRef` (to avoid the React-version mismatch the hook had). The hook normally calls `setClientData` whenever `clientData` changes, but this manual construction bypassed that — so `clientData` was captured at construction and never updated. Per-turn `metadata` merges (`this.defaultMetadata` in `packages/trigger-sdk/src/v3/chat.ts`) used the stale initial value for the whole conversation. `startSession` was already reading from the live ref, so session creation was unaffected; this only fixed the per-turn path.

- `apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.playground.$agentParam/route.tsx` — add a `useEffect` that calls `transport.setClientData(...)` whenever `clientDataJson` changes.

Changeset (patch, @trigger.dev/core) for the schema additions; server-changes file for the webapp-only behaviour fix.
Roll up all the chat.agent feature work that's been accumulating on this branch into 8 user-facing CHANGELOG entries. No behavior change — just tidying up the .changeset/ directory before merge. Final shape:

- chat-agent.md (sdk minor + core patch) — the headline; folds 13: ai-sdk-chat-transport, ai-chat-sandbox-and-ctx, chat-agent-*, chat-customagent-session-binding-and-stop-fixes, chat-reconnect-isstreaming-optional, chat-run-pat-renewal, chat-store-primitive, chat-transport-session-renew-plus-preload, drop-legacy-chat-stream-constants, dry-sloths-divide, trigger-chat-transport-watch-mode.
- sessions-primitive.md (core + sdk patch) — folds 3: session-primitive, session-sdk-toolkit, session-trigger-config-extra-fields.
- agent-skills.md (sdk + core + build + cli patch) — folds 2: chat-agent-skills-phase-1, skills-runtime-subpath.
- ai-tool-helpers.md (sdk patch) — folds 2: ai-tool-execute-helper, ai-tool-toolset-typing.
- mock-chat-agent-test-harness.md (sdk + core patch) — folds 3: mock-chat-agent-test-harness, mock-task-context-test-infra, mock-chat-agent-setup-locals.
- mcp-agent-chat-sessions.md (cli patch) — kept standalone.
- add-is-replay-context.md (core patch) — kept standalone (general task feature).
- truncate-error-stacks.md (core patch) — kept standalone (general infra).

Bumps preserved (chat-agent stays minor on sdk; everything else patch). Auto-named "dry-sloths-divide" got merged into chat-agent and dropped.
The previous pass rolled 26 changesets into 8 but the consolidated descriptions read like docs (full API surface dumps, multiple sections, docs-style headers). Rewrote each so they fit a release-notes bullet list — short, what-shipped framing, with one or two snippets where they help, no exhaustive type / option enumeration.
- Inline prototype-pollution guards at JSON Patch assignment sites in chat-client.ts so CodeQL can statically verify them (the Set.has() check upstream wasn't being traced).
- Wrap JSON.parse(payloadStr) in the playground action's start handler to return 400 on malformed JSON instead of 500.
Replace the legacy 5-attempt retry cap on SSEStreamSubscription with
indefinite retry on a bounded jittered backoff. Adds a force-reconnect
path so the chat transport can recover from silent-dead-socket cases
on mobile (background-kill, bfcache restore) without waiting for the
next backoff slot.
SSEStreamSubscription:
- maxRetries default Infinity (was 5), retryDelayMs 100ms (was 1s),
new maxRetryDelayMs cap (5s), retryJitter 50%
- retryNow(): wake an in-flight backoff
- forceReconnect(): drop current connection AND wake backoff
- fetchTimeoutMs (30s default): aborts stuck connect attempts that
block forever on dead sockets
- stallTimeoutMs (opt-in): force reconnect on silent reader
- nonRetryableStatuses (default [404, 410]): short-circuit retry
for stream-gone / session-closed
- Fixed listener leak where each retry accumulated an abort listener
on the user signal because finally only ran once the recursion
unwound. Cleanup now runs per-attempt via cleanupAttempt() in both
the catch (before recursion) and finally paths.
TriggerChatTransport (browser):
- online -> forceReconnect (existing socket may be stale)
- pageshow.persisted -> forceReconnect (Safari bfcache restore)
- visibilitychange -> visible only:
* hidden >= 30s -> forceReconnect
* hidden < 30s -> retryNow (cheap wake)
- stallTimeoutMs: 60s (sized over typical agent thinking pauses)
Tests: 13 vitest cases covering retry-past-legacy-cap, backoff cap,
jitter variance, retryNow short-circuit, abort-during-backoff,
forceReconnect during fetch and during read (verifies Last-Event-ID
resume on the resumed request), fetchTimeout, stallTimeout, 404/410
short-circuit, custom nonRetryableStatuses, 503 still retries.
Refs TRI-8903.
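The delay schedule can be sketched from the defaults above (base 100ms, 5s cap, 50% jitter); the exact jitter formula is an assumption, here applied downward so the cap is never exceeded:

```typescript
// Bounded, jittered exponential backoff for SSE reconnect attempts.
function retryDelayMs(
  attempt: number, // 0-based retry attempt
  base = 100,      // retryDelayMs default
  max = 5_000,     // maxRetryDelayMs cap
  jitter = 0.5     // retryJitter 50%
): number {
  const capped = Math.min(max, base * 2 ** attempt);
  // Downward jitter keeps reconnect storms from synchronizing while
  // respecting the cap: result lands in (capped * (1 - jitter), capped].
  return capped * (1 - jitter * Math.random());
}
```

retryNow() and forceReconnect() then just resolve the pending sleep on this delay early (and, for forceReconnect, drop the current connection first).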